In order to achieve precise semantic correlation between image and text, an image-text retrieval method based on Feature Enhancement and Semantic Correlation Matching (FESCM) was proposed. Firstly, in the feature enhancement representation module, the multi-head self-attention mechanism was introduced to enhance image region features and text word features, reducing the interference of redundant information with the alignment of image regions and text words. Secondly, the semantic correlation matching module was used not only to capture the correlations between locally salient objects through local matching, but also to incorporate image background information into the global image features and achieve accurate global semantic correlation through global matching. Finally, the local matching scores and global matching scores were combined to obtain the final image-text matching scores. Experimental results show that the FESCM-based method improves the recall sum over the extended visual semantic embedding method by 5.7 and 7.5 percentage points on the Flickr8k and Flickr30k benchmark datasets, respectively, and by 3.7 percentage points over the Two-Stream Hierarchical Similarity Reasoning method on the MS-COCO dataset. The proposed method can effectively improve the accuracy of image-text retrieval and establish the semantic connection between images and texts.
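The local/global matching described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the single-head scaled dot-product attention (standing in for the multi-head version), mean pooling as the global feature, and the fusion weight `alpha` are all simplifying assumptions.

```python
import numpy as np

def self_attention(X):
    # X: (n, d) region or word features. Single-head scaled dot-product
    # self-attention, a simplified stand-in for the multi-head mechanism
    # used in the feature enhancement representation module.
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X  # enhanced features

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def match_score(img_regions, txt_words, alpha=0.5):
    # Local matching: each word is scored against its best-matching region.
    local = np.mean([max(cosine(w, r) for r in img_regions) for w in txt_words])
    # Global matching: mean-pooled features, a crude stand-in for the
    # background-aware global image feature described in the abstract.
    glob = cosine(img_regions.mean(axis=0), txt_words.mean(axis=0))
    # Final score fuses local and global scores (alpha is hypothetical).
    return alpha * local + (1 - alpha) * glob
```

In practice the attention and pooling would operate on learned CNN region features and word embeddings; here random vectors suffice to show the data flow.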
Aiming at the low sensitivity and high false positive rate of pulmonary nodule Computer-Aided Detection (CAD) systems, caused by the varied shapes of nodules and the difficulty of detecting them, a pulmonary nodule detection algorithm based on attention feature pyramid networks was proposed. In the first stage, a more compact Dual Path Network (DPN) was used as the backbone, combined with a Feature Pyramid Network (FPN) for multi-scale prediction to obtain feature information at different levels. At the same time, the Global Attention Mechanism (GAM) was embedded to refine the semantic features to be emphasized during learning and improve the sensitivity of the algorithm. In the second stage, a false positive reduction network was proposed to obtain the final classification predictions. During training, the focal loss function and various data augmentation techniques were used to deal with the data imbalance problem. Experimental results on the public dataset LUNA16 (LUng Nodule Analysis 2016) show that the Competitive Performance Metric (CPM) of the first stage alone reaches 0.908; after adding the false positive reduction network, the CPM reaches 0.933, which is 1.1 percentage points higher than that of the classic Convolutional Neural Network (CNN) algorithm based on Maximum Intensity Projection (MIP). Ablation results show that the dual path network, FPN, and GAM each effectively improve detection sensitivity. These results prove that the proposed two-stage detection algorithm can obtain multi-scale nodule information, improve the sensitivity of pulmonary nodule detection, and reduce the false positive rate.
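The focal loss used in the training stage to handle class imbalance can be sketched as below, following the standard formulation FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t); the default values gamma = 2 and alpha = 0.25 are the common choices, not values confirmed by the abstract.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss for a single binary prediction, down-weighting easy
    examples so that rare positives (nodules) dominate the gradient.
    p: predicted probability of the positive class; y: true label (0 or 1).
    gamma and alpha are assumed defaults, not taken from the paper."""
    pt = p if y == 1 else 1 - p          # probability of the true class
    at = alpha if y == 1 else 1 - alpha  # class-balancing weight
    return -at * (1 - pt) ** gamma * math.log(max(pt, 1e-12))
```

A confidently correct prediction contributes almost nothing, while a hard, misclassified example keeps a large loss, which is exactly the behavior needed when negatives vastly outnumber nodules.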
Pictures taken on hazy days suffer from color distortion and blurred details, which degrades their quality to a certain extent. Many deep learning based methods perform well on synthetic homogeneous haze images, but poorly on the real non-homogeneous dehazing dataset introduced in the latest NTIRE (New Trends in Image Restoration and Enhancement) challenge. The main reasons are that the non-uniform distribution of haze is complicated and texture details are easily lost during dehazing. Moreover, the limited number of samples in this dataset easily leads to overfitting. Therefore, a Conditional Generative Adversarial Network with Dual-Branch generators (DB-CGAN) was proposed. In one branch, U-net was used as the basic architecture; following the "Strengthen-Operate-Subtract" strategy, enhancement modules were added to the decoder to strengthen feature recovery, and dense feature fusion was used to build sufficient connections between non-adjacent levels. In the other branch, a multi-layer residual structure was used to speed up network training, and numerous channel attention modules were concatenated to extract as many high-frequency detail features as possible. Finally, a simple and efficient fusion subnet was used to fuse the two branches. In the experiments, this model significantly outperforms the previous Dark Channel Prior (DCP), All-in-One Dehazing Network (AODNet), Gated Context Aggregation Network (GCANet), and Multi-Scale Boosted Dehazing Network (MSBDN) dehazing models in terms of the evaluation indices Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM). Experimental results show that the proposed network performs better on non-homogeneous dehazing datasets.
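The channel attention modules in the residual branch can be illustrated with a squeeze-and-excitation-style sketch in NumPy; whether the paper uses exactly this SE form, and the reduction ratio implied by the weight shapes `w1`/`w2`, are assumptions.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention over a (C, H, W) feature map: squeeze by
    global average pooling, excite through a two-layer bottleneck, then
    reweight each channel. A stand-in for the concatenated channel
    attention modules in the residual branch, not the paper's exact module."""
    z = feat.mean(axis=(1, 2))            # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ z, 0.0)           # bottleneck with ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # excitation with sigmoid -> (C,)
    return feat * s[:, None, None]        # per-channel reweighting
```

Because the gates `s` lie in (0, 1), the module can only attenuate channels, letting the network emphasize those carrying high-frequency detail.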
Forged and tampered data frames should be identified and filtered out to ensure network security and efficiency. However, existing schemes usually fail when verification devices are attacked or maliciously controlled in a Software Defined Network (SDN). To solve this problem, a blockchain-based data frame security verification mechanism was proposed. Firstly, a Proof of Frame Forwarding (PoFF) consensus algorithm was designed and used to build a lightweight blockchain system. Then, an efficient security verification scheme for SDN data frames was proposed on the basis of this blockchain system. Finally, a flexible semi-random verification scheme was presented to balance verification efficiency against resource cost. Simulation results show that, compared with the hash chain based verification scheme, the proposed scheme significantly decreases the missed detection rate when an equal proportion of switches is maliciously controlled. Specifically, when the proportion is 40%, the reduction is particularly obvious: the missed detection rate stays no higher than 32% in the basic verification mode and can be further reduced to 7% with the assistance of the semi-random verification scheme, both much lower than the 72% missed detection rate of the hash chain based scheme; meanwhile, the resource overhead and communication cost introduced by the proposed mechanism remain within a reasonable range. Additionally, the proposed scheme maintains good verification performance and efficiency even when the SDN controller is completely unable to work.
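The intuition behind semi-random verification can be sketched as follows: a subset of verifiers is sampled per frame, and a forgery is missed only if no sampled honest verifier checks it. This is a hypothetical model built only from the abstract; the sampling rule, the digest scheme, and the behavior attributed to malicious verifiers are all assumptions, not the paper's protocol.

```python
import hashlib
import random

def frame_digest(frame: bytes) -> str:
    # Digest of a frame, assumed here to be recorded on the lightweight
    # blockchain when the frame is forwarded (SHA-256 is an assumption).
    return hashlib.sha256(frame).hexdigest()

def semi_random_check(frame: bytes, recorded: str, verifiers, k, seed=None):
    """Sample k of the verifiers; the frame is rejected if any sampled
    honest verifier recomputes a digest that disagrees with the recorded
    one. `verifiers` is a list of booleans (True = honest); malicious
    verifiers are modeled as always reporting a match. Returns True if
    the frame is accepted (for a forgery, that is a missed detection)."""
    rng = random.Random(seed)
    sampled = rng.sample(range(len(verifiers)), k)
    for i in sampled:
        if verifiers[i] and frame_digest(frame) != recorded:
            return False  # mismatch detected -> frame filtered out
    return True
```

Sampling more verifiers per frame (larger `k`) lowers the miss probability at the cost of extra verification work, which is the efficiency/cost trade-off the semi-random scheme tunes.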
In the simulation of Software Defined Network (SDN), existing network simulation tools usually do not consider the processing delay of SDN switches. To make simulation results more realistic and accurate, a scheme for simulating this processing delay was proposed. First, the scheme divided the switch forwarding process into two parts: flow table lookup operations and the execution of various actions; it then converted both parts into processing delay using the processor frequency and the memory cycle. The processing delays of switches with different configurations were measured and compared in real and simulated environments. The results show that the processing delay simulated by the proposed method is very close to that in the real environment, so the method can accurately estimate the processing delay of switches.
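The conversion described above can be sketched as a simple delay model: lookups charged in memory cycles, actions charged in CPU cycles divided by the processor frequency. The exact cost accounting (one memory cycle per lookup, nanosecond units) is an assumption for illustration, not the paper's calibrated model.

```python
def switch_processing_delay(flow_table_lookups, action_cycles,
                            mem_cycle_ns, cpu_freq_ghz):
    """Hypothetical sketch of the delay model: flow table lookups are
    charged one memory cycle each, and action execution is converted
    from CPU cycles to time via the processor frequency.
    Returns the total processing delay in nanoseconds."""
    lookup_delay = flow_table_lookups * mem_cycle_ns  # memory-bound part
    action_delay = action_cycles / cpu_freq_ghz       # cycles / (cycles/ns) = ns
    return lookup_delay + action_delay
```

For example, two lookups on a 10 ns memory plus 100 cycles of action execution on a 2 GHz processor would be modeled as 20 + 50 = 70 ns; a simulator would add such a term to each forwarded packet's latency.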
A language transmission network is a typical social network, and the structure and dynamics of language networks have a significant impact on language competition and spread. Therefore, taking language competition within the same area as the object of study, an Agent-based social circles network was proposed to build a social network closer to the actual language environment. Both the whole-network parameters and the structural parameters of individual networks exhibit good social network characteristics. Agents in the network can be distributed into social circles of different sizes; they can move, be born, and die, which leads to the disconnection of previous links and the establishment of new contacts. Each Agent adopts one of three possible states: monolingual in language X, monolingual in language Y, or bilingual (state Z), and language is transmitted both horizontally and vertically. On the basis of analyzing the impact of language status, the attractiveness parameter, the peak rates of horizontal and vertical transmission, and the proportion of speakers on language competition, the impact of the social interaction radius and social mobility on language competition was analyzed. Simulation results indicate that, compared with the static social network model, the proposed model is closer to the actual society; it can effectively increase the likelihood of coexistence between languages and provides a better environment for studying the maintenance of endangered languages.
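A single horizontal-transmission update for one Agent can be sketched in the spirit of bilingual extensions of the Abrams-Strogatz competition model; the specific transition probabilities below (status `s` and volatility `a` acting on neighbor fractions) are a common formulation assumed for illustration, not the paper's exact rule, and vertical transmission, mobility, birth, and death are omitted.

```python
import random

def step(state, neighbors, s=0.5, a=1.0, rng=random):
    """One horizontal-transmission update for an Agent in its social circle.
    States: 'X', 'Y' (monolingual) and 'Z' (bilingual). s is the status
    (attractiveness) of language X, a is the volatility exponent; both are
    assumed model parameters. `neighbors` lists the states of contacts."""
    n = max(len(neighbors), 1)
    fx = neighbors.count('X') / n  # fraction of X speakers among contacts
    fy = neighbors.count('Y') / n
    r = rng.random()
    if state == 'X':
        # a monolingual X speaker may acquire Y and become bilingual
        return 'Z' if r < (1 - s) * fy ** a else 'X'
    if state == 'Y':
        return 'Z' if r < s * fx ** a else 'Y'
    # a bilingual Agent may drop one of its languages
    if r < s * fx ** a:
        return 'X'
    if r < s * fx ** a + (1 - s) * fy ** a:
        return 'Y'
    return 'Z'
```

Iterating this rule over all Agents of a social circles network, while links are rewired by mobility and birth/death, reproduces the kind of competition dynamics the abstract studies; the bilingual state Z is what opens the possibility of stable coexistence.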